Leonard - EAKOS 2009

VAST 2009 Challenge
Challenge 3 - Video Analysis

Authors and Affiliations:

Lorne Leonard, The Pennsylvania State University - Research Computing & Cyberinfrastructure, lorne_leonard@hotmail.com [PRIMARY contact]

Tool(s):

EAKOS is a collection of tools that demonstrates how one can interface with web-based visualization and GIS services. The toolset is an early prototype developed by Lorne Leonard in his spare time over the 2008 Christmas break and on weekends leading up to the competition deadline. Lorne works with researchers and faculty at The Pennsylvania State University and uses the toolset to demonstrate potential visualization and analytical solutions that could enhance their research goals.

 

Video:

 

Leonard_Vast2009_Challenge3.mov

 

 

ANSWERS:


MC3.1: Provide a tab-delimited table containing the location, start time and duration of the events identified above. Please name the file Video.txt and place it in the same directory as your index.htm file. Please see the format required in the Task Descriptions.

Video.txt

 


MC3.2:  Identify any events of potential counterintelligence/espionage interest in the video.  Provide a Detailed Answer, including a description of any activities, and why the event is of interest. 

I started this challenge by breaking the video into individual frames and labeling each frame with its frame number. I then partitioned the frames into the four camera positions. This was automated by keeping a record of whenever the camera moved rapidly: rapid camera motion creates a burst of apparent movement across the whole image, so whenever the number of changed pixels between consecutive frames exceeded half the image resolution, I recorded the event in a text file as a camera change. This process took approximately two days.
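A minimal sketch of this camera-change test is shown below, assuming the frames have already been extracted to image files. It is for illustration only and is not the EAKOS code; the per-pixel difference threshold of 25 and the log format are illustrative choices.

import cv2
import numpy as np

def detect_camera_changes(frame_paths, diff_threshold=25, log_path="camera_changes.txt"):
    """Record frame numbers where the camera appears to move rapidly."""
    prev = None
    with open(log_path, "w") as log:
        for i, path in enumerate(frame_paths):
            frame = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            if frame is None:
                continue  # skip unreadable frames
            if prev is not None and frame.shape == prev.shape:
                # Count pixels whose intensity changed noticeably between frames
                moving = np.count_nonzero(cv2.absdiff(frame, prev) > diff_threshold)
                # Rapid camera motion makes nearly every pixel change; half the
                # image resolution is the cut-off described above
                if moving > frame.size // 2:
                    log.write(f"{i}\t{moving}\n")  # frame number and changed-pixel count
            prev = frame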

 

Using the four camera partitions, I automatically processed the frames to detect object movement (Figure 1), using three different pixel areas (30x40, 70x80 and 40x70) to target human-sized movement. Processing all eight camera partitions (four cameras for each of the two videos) took less than a day. The size constraints helped to reduce false positives caused by the poor video quality and kept users focused on major events. These results were then automatically reprocessed to map the start and end of each sequence, its duration, and its total pixel area count, which together score the significance of the event. This reprocessing took less than an hour.
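A simplified sketch of the per-frame movement detection follows, again for illustration only: the thresholded frame difference, contour extraction, and minimum-size test are assumptions about how the 30x40, 70x80 and 40x70 pixel areas were applied, not the exact EAKOS implementation.

import cv2

HUMAN_SIZES = [(30, 40), (70, 80), (40, 70)]  # candidate (width, height) pixel areas

def detect_movement(prev_gray, curr_gray, diff_threshold=25):
    """Return bounding boxes of moving regions roughly matching a human-sized area."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    # OpenCV 4.x return signature
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hits = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        # Keep only regions at least as large as one of the human-sized windows;
        # smaller blobs are treated as noise from the poor video quality
        if any(w >= tw and h >= th for tw, th in HUMAN_SIZES):
            hits.append((x, y, w, h, w * h))  # the w*h term feeds the pixel-area score
    return hits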

 

Figure 1: Detecting object movement.

 

After this processing, the tables are loaded into the toolset and the user can filter by score to reduce the amount of video to search. For this challenge, I sorted the events for each camera by total pixel count and required a minimum of two seconds of video before inspecting them. When the user double-clicks an event row, its frames are loaded so the significance of the sequence can be inspected (Figure 2). Using this filtering technique, and narrowing the time range based on mini challenges one and two, it took less than 30 minutes to examine the possible events. Examining the remaining portion of the videos for activities of interest took less than 90 minutes.
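The score-based filtering can be expressed in a few lines of Python; this is only a sketch of the idea, and the column names, the tab-delimited event file, and the assumed frame rate are illustrative rather than the toolset's actual schema.

import csv

FPS = 10  # assumed frame rate, used only to convert frame counts to seconds

def load_candidate_events(event_file, min_seconds=2.0):
    """Load detected events, drop short ones, and rank by total pixel-area count."""
    with open(event_file, newline="") as fh:
        rows = list(csv.DictReader(fh, delimiter="\t"))
    events = []
    for row in rows:
        duration = (int(row["end_frame"]) - int(row["start_frame"])) / FPS
        if duration >= min_seconds:  # at least two seconds of video to inspect
            events.append({**row, "duration_seconds": duration})
    # Largest total pixel-area count first, so the most significant events are inspected first
    events.sort(key=lambda e: int(e["total_pixel_area"]), reverse=True)
    return events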

 

Figure 2: Evaluating a sequence for events of potential counterintelligence interest.

 

I identified cameras two and four as the best views to search for counterintelligence activity: people's shapes are clearest in their foreground and middle ground, and these spaces often contain large numbers of people. I focused on certain areas, marked in Figure 3, where people would congregate for short periods. Table 1 summarizes events of potential counterintelligence/espionage interest in the video, based on unusual behaviors.

 

However, there is clearly one act of counterintelligence/espionage in the video (highlighted in yellow in Table 1) at 11:23:41 on Day 2 (Jan 26, 2008), camera 2 (Figure 4). A man is waiting near the railings with a white briefcase. At 11:24:53, a woman with a black briefcase is seen talking to this man. At this time, both briefcases are on the ground between them and the man appears to be talking to someone on a cell phone. At 11:29:58, the man picks up the black briefcase and starts to walk away toward the left side of the camera. At 11:30:01, the woman picks up the white briefcase and walks away toward the right side of the camera. From camera 3 the woman is seen walking down the sidewalk. These exact times were found manually, but the event itself was detected using the filtering technique described above.

Note: Times are based on the part 1 video starting at 10 AM and the part 2 video starting at 8 AM.

 

Figure 3A: Areas of interest for Camera 2.

Figure 3B: Areas of interest for Camera 4.

 

Camera | Time | Description
Camera 4 | 10:55:12 (Day 1, Jan 24, 2008) | Possible exchange?
Camera 2 | 11:13:06 (Day 1, Jan 24, 2008) | Two people close together; one appears to be whispering in the other's ear.
Camera 2 | 2:15:20 (Day 1, Jan 24, 2008) | Lady points in one direction, walks in the opposite direction, then turns around.
Camera 2 | 8:17:30 (Day 2, Jan 26, 2008) | Person acting strangely.
Camera 4 and 2 | 8:26:05 (Day 2, Jan 26, 2008) | Person is alone at Camera 4 and then with another person at Camera 2.
Camera 4 | 8:53:00 (Day 2, Jan 26, 2008) | One person on a bike; possible carrier?
Camera 2 | 9:19:32 (Day 2, Jan 26, 2008) | Person looks over the railing.
Camera 2 | 9:55:29 (Day 2, Jan 26, 2008) | Person vanishes.
Camera 4 | 10:32:11 (Day 2, Jan 26, 2008) | Two people talking; it appears one puts something in the other's pocket.
Camera 2 | 11:23:41 to 11:30:04 (Day 2, Jan 26, 2008) | Man at railing with a white briefcase, then starts talking to a woman. He leaves with the black briefcase and she leaves with the white one.

Table 1: Possible counterintelligence/espionage events

Note: Times are based on the part 1 video starting at 10 AM and the part 2 video starting at 8 AM.

 

Figure 4: Act of counterintelligence/espionage, Day 2, Camera 2 (frames at 11:23:42, 11:24:53, 11:29:58, and 11:30:05).

Note: Times are based on the part 1 video starting at 10 AM and the part 2 video starting at 8 AM.